1.
Is human compositionality meta-learned? Jacob Russin, Sam Whitman McGrath, Ellie Pavlick & Michael J. Frank - 2024 - Behavioral and Brain Sciences 47:e162.
    Recent studies suggest that meta-learning may provide an original solution to an enduring puzzle about whether neural networks can explain compositionality – in particular, by raising the prospect that compositionality can be understood as an emergent property of an inner-loop learning algorithm. We elaborate on this hypothesis and consider its empirical predictions regarding the neural mechanisms and development of human compositionality.
2.
Properties of LoTs: The footprints or the bear itself? Sam Whitman McGrath, Jacob Russin, Ellie Pavlick & Roman Feiman - 2023 - Behavioral and Brain Sciences 46:e284.
There are two ways to understand any proposed properties of languages of thought (LoTs): as diagnostic or constitutive. We argue that this choice is critical. If candidate properties are diagnostic, their homeostatic clustering requires explanation via an underlying homeostatic mechanism. If constitutive, there is no clustering, only the properties themselves. Whether deep neural networks (DNNs) are alternatives to LoTs or potential implementations turns on this choice.